Components and Interfaces of a Process Management System for Parallel Programs
Parallel jobs are different from sequential jobs and require a different type
of process management. We present here a process management system for parallel
programs such as those written using MPI. A primary goal of the system, which
we call MPD (for multipurpose daemon), is to be scalable. By this we mean that
startup of interactive parallel jobs comprising thousands of processes is
quick, that signals can be quickly delivered to processes, and that stdin,
stdout, and stderr are managed intuitively. Our primary target is parallel
machines made up of clusters of SMPs, but the system is also useful in more
tightly integrated environments. We describe how MPD enables much faster
startup and better runtime management of parallel jobs. We show how close
control of stdio can support the easy implementation of a number of convenient
system utilities, even a parallel debugger. We describe a simple but general
interface that can be used to separate any process manager from a parallel
library, which we use to keep MPD separate from MPICH.
Comment: 12 pages, Workshop on Clusters and Computational Grids for Scientific Computing, Sept. 24-27, 2000, Le Chateau de Faverges de la Tour, France
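As an illustration of the startup scheme this abstract describes, the following toy Python program shows how a ring of persistent daemons can fan out a job-start request in a single trip around the ring, so each host launches its local ranks as soon as the message arrives rather than waiting on one central launcher. All names and the message format here are invented for illustration, not MPD's actual protocol.

```python
# Minimal sketch of ring-based parallel job startup, assuming a
# pre-existing ring of daemons (one per host). Not MPD's real protocol.

class Daemon:
    def __init__(self, host):
        self.host = host
        self.next = None      # successor daemon in the ring
        self.launched = []    # ranks started on this host

    def handle(self, msg):
        # Claim the next slice of ranks, then forward; the message
        # stops once every slice has been consumed (one ring trip).
        if msg["slices"]:
            lo, hi = msg["slices"].pop(0)
            self.launched = list(range(lo, hi))
            self.next.handle(msg)

# Build a four-daemon ring and start an 8-process job, two ranks per host.
daemons = [Daemon(f"node{i}") for i in range(4)]
for d, successor in zip(daemons, daemons[1:] + daemons[:1]):
    d.next = successor

daemons[0].handle({"slices": [(0, 2), (2, 4), (4, 6), (6, 8)]})
for d in daemons:
    print(d.host, "runs ranks", d.launched)
```

Because the daemons persist between jobs, the per-job cost is one message circulating the ring plus concurrent local forks, which is what makes startup of thousand-process jobs quick.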
Methods to Model-Check Parallel Systems Software
We report on an effort to develop methodologies for formal verification of
parts of the Multi-Purpose Daemon (MPD) parallel process management system. MPD
is a distributed collection of communicating processes. While the individual
components of the collection execute simple algorithms, their interaction leads
to unexpected errors that are difficult to uncover by conventional means. Two
verification approaches are discussed here: the standard model-checking approach using the software model checker SPIN, and the nonstandard use of OTTER, a general-purpose first-order resolution-style theorem prover, to conduct traditional state-space exploration. We compare the modeling methodologies and analyze the performance and scalability of the two methods with respect to the verification of MPD.
Comment: 12 pages, 3 figures, 1 table
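To make the state-space-exploration idea concrete, here is a minimal explicit-state search in Python over all interleavings of two processes that update a shared counter non-atomically. The example is invented, not drawn from the MPD models, but it exhibits the same kind of interaction error that SPIN and the OTTER encoding are used to uncover.

```python
# Toy explicit-state model checking: enumerate every reachable state of
# two processes that each do a non-atomic load/store increment of a
# shared counter, and check a safety property in each state.
from collections import deque

# State: (pc1, local1, pc2, local2, shared). Each process does
# load (pc 0 -> 1) then store local+1 (pc 1 -> 2).
def successors(state):
    pc1, l1, pc2, l2, sh = state
    if pc1 == 0: yield (1, sh, pc2, l2, sh)        # P1 loads shared
    if pc1 == 1: yield (2, l1, pc2, l2, l1 + 1)    # P1 stores local+1
    if pc2 == 0: yield (pc1, l1, 1, sh, sh)        # P2 loads shared
    if pc2 == 1: yield (pc1, l1, 2, l2, l2 + 1)    # P2 stores local+1

seen, queue, violations = set(), deque([(0, 0, 0, 0, 0)]), []
while queue:
    s = queue.popleft()
    if s in seen:
        continue
    seen.add(s)
    pc1, _, pc2, _, sh = s
    if pc1 == 2 and pc2 == 2 and sh != 2:   # property: both increments land
        violations.append(s)
    queue.extend(successors(s))

print(len(seen), "states explored;", len(violations), "violating states")
# The lost-update interleavings are found: shared == 1 at termination.
```

SPIN performs essentially this search (with far better state compression) from a Promela model, while the OTTER approach encodes states and transitions as first-order clauses and lets resolution drive the same exploration.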
Analyzing high energy physics data using database computing: Preliminary report
A proof-of-concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting Super Collider (SSC) laboratory. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year from proton colliding-beam collisions. Each 'event' consists of a set of vectors with a total length of approximately one megabyte. This represents an increase of approximately 2 to 3 orders of magnitude over the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is complete and can analyze HEP experimental data approximately an order of magnitude faster than current production software on data sets of approximately 1 GB. The Mark 1 HEPDBC System, designed to analyze data sets 10 to 100 times larger, is currently undergoing testing.
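The core idea, expressing an analysis cut as a declarative query over event relations rather than a loop over sequential event files, can be sketched with present-day tools. The schema, column names, and cut below are invented for illustration and are not the HEPDBC design.

```python
# Illustrative only: event attributes stored as a relation, with an
# analysis 'cut' expressed as a query the database engine can optimize,
# rather than procedural code that must touch every event record.
import sqlite3, random

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, n_tracks INTEGER, e_total REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, random.randint(0, 40), random.uniform(0, 500)) for i in range(100_000)],
)

selected = con.execute(
    "SELECT COUNT(*) FROM events WHERE n_tracks > 20 AND e_total > 250"
).fetchone()[0]
print(selected, "events pass the cut")
```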
The Aurora Or-Parallel Prolog system
Aurora is a prototype or-parallel implementation of the full Prolog language for shared-memory multiprocessors, developed as part of an informal research collaboration known as the "Gigalips Project". It currently runs on Sequent and Encore machines. It was constructed by adapting SICStus Prolog, a fast, portable, sequential Prolog system. The techniques for constructing a portable multiprocessor version follow those pioneered in a predecessor system, ANL-WAM. The SRI model was adopted as the means to extend the SICStus Prolog engine for or-parallel operation. We describe the design and main implementation features of the current Aurora system and present some experimental results. For a range of benchmarks, Aurora on a 20-processor Sequent Symmetry is 4 to 7 times faster than Quintus Prolog on a Sun 3/75. Good performance is also reported on some large-scale Prolog applications.
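The essence of or-parallel execution can be sketched as follows: when several clauses match a goal, different workers explore the alternative branches concurrently and the solutions are merged. The toy clause database and goal below are invented, and the sketch deliberately ignores the shared binding environments whose management is the hard problem the SRI model addresses.

```python
# Toy or-parallelism: each worker takes one alternative for the first
# subgoal and explores its subtree independently. A real engine such as
# Aurora must also share and switch binding environments efficiently.
from concurrent.futures import ThreadPoolExecutor

# parent(X, Y) facts; the goal is grandparent(tom, Who).
parent = [("tom", "bob"), ("tom", "liz"), ("bob", "ann"),
          ("bob", "pat"), ("liz", "joe")]

def solve_branch(child_of_tom):
    # Solve the second subgoal parent(child_of_tom, Who) sequentially
    # within this worker's branch of the search tree.
    return [who for (p, who) in parent if p == child_of_tom]

alternatives = [c for (p, c) in parent if p == "tom"]   # the or-branches
with ThreadPoolExecutor() as pool:
    solutions = [w for branch in pool.map(solve_branch, alternatives)
                 for w in branch]
print("grandchildren of tom:", solutions)   # ['ann', 'pat', 'joe']
```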
Performance analysis and optimization of in-situ integration of simulation with data analysis: zipping applications up
This paper targets an important class of applications that combines HPC simulations with data analysis for online or real-time scientific discovery. We use state-of-the-art parallel-I/O and data-staging libraries to build simulation-time data analysis workflows, and conduct performance analysis with real-world applications of computational fluid dynamics (CFD) and molecular dynamics (MD) simulations. Driven by in-depth analysis of performance inefficiencies, we design an end-to-end application-level approach that eliminates the interlocks and synchronizations present in existing methods. Our new approach employs both task parallelism and pipeline parallelism to effectively reduce synchronization. In addition, we design a fully asynchronous, fine-grained, pipelining runtime system named Zipper. Zipper is a multi-threaded distributed runtime system that executes in a layer below the simulation and analysis applications. To further reduce the simulation application's stall time and improve data transfer performance, we design a concurrent data transfer optimization that uses both the HPC network and the parallel file system for increased bandwidth. The scalability of the Zipper system has been verified by a performance model and various large-scale empirical experiments. Experimental results on an Intel multicore cluster as well as a Knights Landing HPC system demonstrate that the Zipper-based approach can outperform the fastest state-of-the-art I/O transport library by up to 220% using 13,056 processor cores.
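The pipeline-parallel decoupling this abstract describes can be illustrated on a single node with a bounded queue between a simulation stage and an analysis stage; the simulation hands off each timestep's output and keeps computing while analysis proceeds concurrently. Zipper itself is a distributed multi-threaded runtime, and everything named below is an invented stand-in rather than its actual API.

```python
# Single-node sketch of simulation/analysis pipelining: stages overlap
# through a bounded queue instead of interlocking at every timestep.
import queue, threading, time

staged = queue.Queue(maxsize=4)   # bounded: backpressure if analysis lags

def simulation(steps):
    for t in range(steps):
        time.sleep(0.01)                  # stand-in for a compute step
        staged.put(("step", t, [t] * 8))  # asynchronous hand-off
    staged.put(None)                      # end-of-run sentinel

def analysis():
    while (item := staged.get()) is not None:
        _, t, data = item
        print(f"analyzed step {t}: mean={sum(data) / len(data)}")

consumer = threading.Thread(target=analysis)
consumer.start()
simulation(5)
consumer.join()
```

The bounded queue captures the design trade-off: the simulation only stalls when analysis falls more than a fixed number of steps behind, which is the stall time the paper's concurrent data transfer optimization works to minimize.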